 AAAI AI-Alert Ethics for Oct 27, 2020


A Practical Guide to Building Ethical AI

#artificialintelligence

Companies are leveraging data and artificial intelligence to create scalable solutions -- but they are also scaling their reputational, regulatory, and legal risks. For instance, Los Angeles is suing IBM for allegedly misappropriating data it collected with its ubiquitous weather app. Optum is being investigated by regulators for creating an algorithm that allegedly recommended that doctors and nurses pay more attention to white patients than to sicker black patients. Goldman Sachs is being investigated by regulators for using an AI algorithm that allegedly discriminated against women by granting larger credit limits to men than to women on their Apple Cards. Facebook infamously granted Cambridge Analytica, a political firm, access to the personal data of more than 50 million users.


Disassembly Required -- Real Life

#artificialintelligence

HitchBot, a friendly-looking talking robot with a bucket for a body and pool-noodle limbs, first arrived on American soil back in 2015. This "hitchhiking" robot was an experiment by a pair of Canadian researchers who wanted to investigate people's trust in, and attitudes towards, technology. The researchers wanted to see "whether a robot could hitchhike across the country, relying only on the goodwill and help of strangers." With rudimentary computer vision and a limited vocabulary, but no independent means of locomotion, HitchBot was fully dependent on the participation of willing passers-by to get from place to place. Fresh off its successful journey across Canada, where it also picked up a fervent social media following, HitchBot was dropped off in Massachusetts and struck out towards California. But HitchBot never made it to the Golden State.


'Reasonable Explainability' for Regulating AI in Health

#artificialintelligence

Emerging technology is slowly finding a place in developing countries for its potential to plug gaps in ailing public service systems, such as healthcare. At the same time, cases of bias and discrimination, compounded by the complexity of algorithms, have created a trust problem with technology. Promoting transparency in algorithmic decision-making through explainability can be pivotal in addressing the lack of trust in medical artificial intelligence (AI), but this comes with challenges for providers and regulators. In generating explainability, AI providers need to prioritise their accountability for patient safety, given that even the most accurate algorithms remain opaque. There are also additional costs involved. Regulators looking to facilitate the entry of innovation while prioritising patient safety will need to determine a reasonable level of explainability, considering risk factors and the context of use, along with adaptive and experimental means of regulation.

AI models across the globe have come under the scanner over ethical issues; for instance, Amazon's hiring algorithm reportedly discriminated against women,[1] and there is evidence of racial bias in the facial recognition software used by law enforcement in the United States (US).[2] While biased AI has various implications, concerns around the use of AI in ethically sensitive industries, such as healthcare, justifiably require closer examination. Medical AI models have become more commonplace in clinical and healthcare settings due to their higher accuracy and lower turnaround time and cost in comparison to non-AI techniques.